On convergence of a global search strategy for reverse convex problems

Author

  • Alexander S. Strekalovsky
Abstract

Nowadays, specialists in optimization face persistent demands from the world of applications to create an effective apparatus for finding a truly global solution to nonconvex problems, in which there may exist local solutions located very far from a global one, even with respect to the values of the goal function. As is well known, a conspicuous limitation of convex optimization methods applied to nonconvex problems is their tendency to be trapped at a local extremum, or even a critical point, depending on the starting point. In other words, the classical apparatus proves inoperative for new problems arising from practice. That is why the development of nonconvex optimization followed the route of tools initially created in, and for, discrete optimization (DO). Thus, DO supplied part of the apparatus of continuous optimization (CO). Gradually, ideas such as branch and bound, cuts, and so forth became very popular in the nonconvex area of CO, although in some cases they turned out to be overly sensitive, for instance, to changes in the size of a problem. In [8, 9, 10, 11, 12] another approach to d.c. programming problems was proposed, based on global optimality conditions (GOC), which has proved its effectiveness for numerous continuous (even dynamical) and discrete optimization problems [11, 13, 14, 15]. Nevertheless, the theoretical substantiation of the approach remains incomplete in some cases. This concerns, in particular, convergence proofs for global search strategies based on GOC. This paper aims specifically to fill this lacuna for reverse convex problems (RCP). On the other hand, we demonstrate below the importance of a new notion in optimization: the notion of a resolving set, or a "good" approximation of the level surface of a convex function g(·). The crucial influence of this approximation on the results of global search was transparently shown in [11, 12, 14, 15] by computational experiments. Here we would like
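To make the notion concrete, here is a minimal sketch (not the paper's algorithm) of how a "resolving set" can be built as a finite approximation of a level surface {y : g(y) = β} of a convex function g(·), by root-finding along a fan of directions. The quadratic g and the direction grid are illustrative assumptions.

```python
import numpy as np

def g(y):
    """A convex quadratic, standing in for the constraint function g(.)."""
    return float(y @ y)

def level_surface_points(beta, directions, lo=0.0, hi=10.0, tol=1e-10):
    """For each unit direction d, find mu > 0 with g(mu * d) = beta by bisection.

    This works here because t -> g(t * d) is increasing for t >= 0 when g is
    convex and minimized at the origin; finer direction grids give better
    approximations of the level surface.
    """
    points = []
    for d in directions:
        a, b = lo, hi
        while b - a > tol:
            m = 0.5 * (a + b)
            if g(m * d) < beta:
                a = m
            else:
                b = m
        points.append(0.5 * (a + b) * d)
    return points

# Eight unit directions in the plane (an illustrative choice).
dirs = [np.array([np.cos(t), np.sin(t)])
        for t in np.linspace(0.0, 2 * np.pi, 8, endpoint=False)]
approx = level_surface_points(beta=4.0, directions=dirs)
# Each point y in `approx` satisfies g(y) ~= 4, i.e. it lies (numerically)
# on the level surface; such points are the candidates a global search
# strategy inspects when verifying global optimality conditions.
```

In the strategies the abstract refers to, the quality of this finite approximation directly controls how reliably the global search escapes local solutions, which is why the choice of the resolving set matters.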


Similar articles

Modify the linear search formula in the BFGS method to achieve global convergence.


Full text

Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming

In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm on convex quadratic symmetric cone programmin...

Full text

A Recurrent Neural Network for Solving Strictly Convex Quadratic Programming Problems

In this paper we present an improved neural network to solve strictly convex quadratic programming (QP) problems. The proposed model is derived from a piecewise equation corresponding to the optimality conditions of the convex QP problem and has lower structural complexity than other existing neural network models for solving such problems. In the theoretical aspect, stability and global converge...

Full text

A Differential Evolution and Spatial Distribution based Local Search for Training Fuzzy Wavelet Neural Network

Abstract   Many parameter-tuning algorithms have been proposed for training Fuzzy Wavelet Neural Networks (FWNNs). The absence of an appropriate structure, convergence to local optima, and low speed are deficiencies of FWNN learning algorithms in previous studies. In this paper, a Memetic Algorithm (MA) is introduced to train FWNNs, addressing the aforementioned learning deficiencies. Differential Evolution...

Full text

Constrained Nonlinear Optimal Control via a Hybrid BA-SD

The non-convex behavior exhibited by nonlinear systems limits the application of classical optimization techniques to optimal control problems for such systems. This paper proposes a hybrid algorithm, namely BA-SD, combining the Bee Algorithm (BA) with the steepest descent (SD) method for numerically solving nonlinear optimal control (NOC) problems. The proposed algorithm includes th...

Full text


Journal title:
  • JAMDS

Volume 9, Issue 

Pages  -

Publication date: 2005